The highway deep neural network (HDNN) is a type of depth-gated feedforward neural network, which has been shown to be easier to train with more hidden layers and to generalise better than conventional plain deep neural networks (DNNs). Previously, we investigated a structured HDNN architecture for speech recognition, in which the two gate functions were tied across all the hidden layers, and we were able to train a much smaller model without sacrificing recognition accuracy. In this paper, we continue the study of this architecture with sequence-discriminative training criteria and speaker adaptation techniques on the AMI meeting speech recognition corpus. We show that these two techniques improve speech recognition accuracy on top of a model trained with the cross-entropy criterion. Furthermore, we demonstrate that the two gate functions, which are tied across all the hidden layers, are able to control the information flow over the whole network, and that considerable improvements can be achieved by updating only these gate functions in both sequence training and adaptation experiments.
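To illustrate the idea of gate functions tied across layers, here is a minimal NumPy sketch of a highway-style feedforward pass in which each layer has its own hidden transform but all layers share a single set of transform/carry gate parameters. All names, dimensions, and initialisations are hypothetical and are not taken from the paper's actual model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TiedGateHighwayNet:
    """Sketch of a highway network whose transform and carry gates
    share one parameter set across all hidden layers (assumed shapes)."""

    def __init__(self, dim, n_layers, seed=0):
        rng = np.random.default_rng(seed)
        # Layer-specific hidden transforms.
        self.W = [rng.standard_normal((dim, dim)) * 0.1 for _ in range(n_layers)]
        self.b = [np.zeros(dim) for _ in range(n_layers)]
        # A single pair of gate parameter sets, tied across every layer;
        # updating only these few matrices adapts the whole network.
        self.W_T = rng.standard_normal((dim, dim)) * 0.1  # transform gate
        self.b_T = np.zeros(dim)
        self.W_C = rng.standard_normal((dim, dim)) * 0.1  # carry gate
        self.b_C = np.zeros(dim)

    def forward(self, x):
        h = x
        for W, b in zip(self.W, self.b):
            H = np.tanh(h @ W + b)                 # layer-specific transform
            T = sigmoid(h @ self.W_T + self.b_T)   # tied transform gate
            C = sigmoid(h @ self.W_C + self.b_C)   # tied carry gate
            h = T * H + C * h                      # highway combination
        return h
```

Because the gates are shared, sequence training or speaker adaptation that touches only `W_T`, `b_T`, `W_C`, and `b_C` updates a tiny fraction of the parameters while still modulating the information flow through every layer.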